Merge DyHPO #60
base: master
Conversation
Could we …

```python
    MFPI_Random_HiT,
    in_fill="best",
    augmented_ei=False,
)
}
```
For this file, I'm not sure we should bloat what is exposed to the main user by listing every possible acquisition function. What do you think?
Agree with the idea, but currently I don't see an easy solution that doesn't bloat the file. I commented out a few unnecessary ones; a few more could be commented out, but our main form of usage currently refers to this dict explicitly.
We should also probably add some basic unit tests for the freeze-thaw sampler and MF-EI.
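One possible way to keep the exposed mapping small would be a lazy, string-keyed lookup along these lines. This is only a rough sketch: the module paths, class names, and the `get_acquisition` helper are hypothetical and not the actual API.

```python
# Sketch of a lazy, string-keyed acquisition registry.
# NOTE: module paths, class names, and this helper are hypothetical examples.
import importlib

# Only the names users are expected to pass; everything else stays internal.
_ACQUISITION_REGISTRY: dict[str, tuple[str, str]] = {
    "MFEI": ("my_package.acquisitions.mf_ei", "MFEI"),
    "MFPI": ("my_package.acquisitions.mf_pi", "MFPI"),
    "MFUCB": ("my_package.acquisitions.mf_ucb", "MFUCB"),
}


def get_acquisition(name: str, **kwargs):
    """Resolve an acquisition function by name, importing only the one requested."""
    try:
        module_path, class_name = _ACQUISITION_REGISTRY[name]
    except KeyError as err:
        raise ValueError(
            f"Unknown acquisition '{name}'. Available: {sorted(_ACQUISITION_REGISTRY)}"
        ) from err
    cls = getattr(importlib.import_module(module_path), class_name)
    return cls(**kwargs)
```

Callers would then pass a short string name, and only the chosen acquisition class would be imported, instead of exposing the full dict of classes upfront.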
We decided that this has diverged quite a bit from the existing codebase and will likely diverge further. Can you move this PR to draft to keep it as a reference? We can eventually rework it into what we will have.
Merges work from a private repo on DyHPO, DPL surrogates, and related acquisition functions. This is the version without refactoring; all further changes should be made on this branch.
Added Surrogates:
- DeepGP: Deep Gaussian Process surrogate (original implementation, paper)
- DPL: Deep Power Law ensemble surrogate (original implementation, paper)

Added Acquisition functions:
- MFEI: Multi-Fidelity Expected Improvement
- MFPI: Multi-Fidelity Probability of Improvement
- MFUCB: Multi-Fidelity Upper Confidence Bound
- MF{EI, UCB, PI}AtMax: versions of each where the incumbent value is the best value at the highest fidelity
- MF{EI, UCB, PI}Dyna: versions of each where the incumbent value is the best global value and the model extrapolates up to the highest seen fidelity + 1
- MF{EI, PI}Random: versions of each with extrapolation at random horizons and random thresholds

Added Optimizers:
- MFEIBO: DyHPO, DPL, and many similar optimizers can be obtained by combining the corresponding surrogates with the acquisition functions above
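To make the last point concrete, here is a self-contained toy sketch of the composition pattern; all names below (`ToySurrogate`, `expected_improvement`, `propose_next`) are invented for illustration and are not this repo's API. A surrogate predicts a mean and standard deviation for a (configuration, fidelity) pair, an EI-style acquisition scores candidates against a chosen incumbent, and the optimizer picks the maximizer.

```python
# Toy illustration of the "surrogate + acquisition function = optimizer" pattern.
# Every name here is hypothetical; this sketches the idea, not the actual code.
from statistics import NormalDist


class ToySurrogate:
    """Stand-in for a DeepGP/DPL-style model: predicts mean/std of the loss
    for a (config, fidelity) pair."""

    def predict(self, config: float, fidelity: int) -> tuple[float, float]:
        # A fake posterior: loss shrinks with fidelity, uncertainty varies with config.
        mean = (config - 0.3) ** 2 + 1.0 / (fidelity + 1)
        std = 0.1 + 0.05 * abs(config - 0.5)
        return mean, std


def expected_improvement(mean: float, std: float, incumbent: float) -> float:
    """EI for minimization: expected amount by which the incumbent loss is beaten."""
    if std <= 0:
        return max(incumbent - mean, 0.0)
    z = (incumbent - mean) / std
    n = NormalDist()
    return (incumbent - mean) * n.cdf(z) + std * n.pdf(z)


def propose_next(surrogate, candidates, incumbent: float, fidelity: int) -> float:
    """One BO step: score each candidate at the target fidelity, return the best.
    The AtMax/Dyna/Random variants differ mainly in how `incumbent` and `fidelity`
    are chosen (best value at max fidelity vs. best global value, fixed vs. random
    extrapolation horizon)."""
    return max(
        candidates,
        key=lambda c: expected_improvement(*surrogate.predict(c, fidelity), incumbent),
    )


if __name__ == "__main__":
    surrogate = ToySurrogate()
    candidates = [i / 10 for i in range(11)]
    print(propose_next(surrogate, candidates, incumbent=0.5, fidelity=3))
```

In this toy framing, a DyHPO- or DPL-like optimizer roughly corresponds to a particular surrogate paired with a particular incumbent/horizon rule, which appears to be what combining the surrogates and acquisition functions above is meant to achieve.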